feat: Add Gemma3 chat handler (#1976) #1989
base: main
Conversation
…in Gemma3ChatHandler
I've been using it a bit and it works nicely. I had to figure out the message structure, but maybe that's normal for different chat handlers; I'm not that familiar with llama-cpp. I was used to this shape:

```json
{
  "type": "image",
  "image": {
    "url": "https://image.com/img.jpg"
  }
}
```

whereas here "image_url" is used in both places. |
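For contrast, the shape this handler accepts, as shown in the full example further down in the thread; both a bare string and a nested object work:

```python
{'type': 'image_url', 'image_url': 'https://image.com/img.jpg'}
# or
{'type': 'image_url', 'image_url': {'url': 'https://image.com/img.jpg'}}
```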
How would that work with a local image? |
Sorry, I didn't modify the original chat template of gemma3. Here is a full example:

```python
from pathlib import Path

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Gemma3ChatHandler


def image_to_base64_uri(image: bytes | str):
    import base64
    import urllib.request as request

    if isinstance(image, bytes):
        data = base64.b64encode(image).decode('utf-8')
    else:
        with request.urlopen(image) as f:
            data = base64.b64encode(f.read()).decode('utf-8')
    return f'data:image/png;base64,{data}'


chat_handler = Gemma3ChatHandler(clip_model_path='path/to/mmproj')
llama = Llama(
    model_path='path/to/model',
    chat_handler=chat_handler,
    n_ctx=2048,  # n_ctx should be increased to accommodate the image embedding
)

messages = [
    {
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'please compare these pictures'},
            {'type': 'image_url', 'image_url': 'https://xxxx/img1.jpg'},
            {'type': 'image_url', 'image_url': {'url': 'https://xxxx/img2.png'}},
            {'type': 'image_url', 'image_url': image_to_base64_uri(Path('path/to/img3.jpg').read_bytes())},
            {'type': 'image_url', 'image_url': {'url': image_to_base64_uri(Path('path/to/img4.png').read_bytes())}},
            {'type': 'text', 'text': 'and then tell me which one looks the best'},
        ],
    }
]

output = llama.create_chat_completion(
    messages,
    stop=['<end_of_turn>', '<eos>'],
    max_tokens=500,
    stream=True,
)

for chunk in output:
    delta = chunk['choices'][0]['delta']
    if 'role' in delta:
        print(delta['role'], end=':\n')
    elif 'content' in delta:
        print(delta['content'], end='')

llama._sampler.close()
llama.close()
```
|
Bump on this, thanks for your work! gemma3 is a great model to have support for, I'm waiting on it! |
Hey @kossum, just wondering, does this handler support function calling? I ask because the handler for llava1.5 supports multimodal (vision) and tool calling at once; since Gemma3 also has tool-calling capabilities, it would be great to have both in a single handler! |
Hello @joaojhgs, gemma3 (especially the 12b and 27b versions) has strong instruction-following abilities and can generate structured function-call outputs through well-designed prompts. But unlike gpt4 or claude, gemma3 does not have built-in support for tool-call tokens or JSON schema enforcement.
So to implement function calling with gemma3, you must rely on carefully designed prompts to guide the model in producing the correct format. Simple example:

```python
import json

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Gemma3ChatHandler

chat_handler = Gemma3ChatHandler(clip_model_path='path/to/mmproj')
llama = Llama(
    model_path='path/to/model',
    chat_handler=chat_handler,
    n_ctx=2048,  # n_ctx should be increased to accommodate the image embedding
)


def analyze_image(image_id: str, description: str):
    print('image_id:', image_cache.get(image_id))
    print('description:', description)
    ...


image_cache = {'img_01': 'https://xxxx/img_01.jpg'}
function_table = {'analyze_image': analyze_image}

# input arg1
image_id = 'img_01'
# input arg2
question = f'Here is the image with ID `{image_id}`. Please analyze it.'

output = llama.create_chat_completion(
    [
        {
            'role': 'system',
            'content': '''You can call the following function:

- analyze_image(image_id: str, description: str)

You will be shown an image. First, analyze and describe its content in detail.
Then, return a function call with:
- the assigned image_id (provided in the input)
- a description of what the image shows (your own analysis)

Respond only with a JSON (without code blocks) function call like:

{
  "function": "analyze_image",
  "arguments": {
    "image_id": "<image id>",
    "description": "<description of the image>"
  }
}
''',
        },
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': question},
                {'type': 'image_url', 'image_url': image_cache[image_id]},
            ],
        },
    ],
    stop=['<end_of_turn>', '<eos>'],
    max_tokens=500,
)

data = json.loads(output['choices'][0]['message']['content'])
result = function_table[data['function']](**data['arguments'])
...
```

Naturally, if multimodal capabilities aren't needed, this chat handler can be omitted. |
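Since gemma3 does not enforce the JSON format, the `json.loads` call above can fail on malformed output. A minimal defensive-parsing sketch; the `try_parse_call` helper is an illustration, not part of the PR:

```python
import json


def try_parse_call(text: str):
    """Return (function_name, arguments) if the model emitted a valid call, else None."""
    try:
        data = json.loads(text)
        return data['function'], data['arguments']
    except (json.JSONDecodeError, KeyError, TypeError):
        return None  # fall back to treating the reply as plain text


parsed = try_parse_call(output['choices'][0]['message']['content'])
if parsed is not None:
    name, arguments = parsed
    result = function_table[name](**arguments)
```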
Thanks, I didn't know about that! |
Added the gemma3 chat handler and fixed the image embedding; supports multiple images.
Included llama.cpp functions and structures:
Usage:
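A minimal sketch of the intended usage, condensed from the full example earlier in this thread (model and mmproj paths are placeholders):

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Gemma3ChatHandler

chat_handler = Gemma3ChatHandler(clip_model_path='path/to/mmproj')
llama = Llama(
    model_path='path/to/model',
    chat_handler=chat_handler,
    n_ctx=2048,  # increase n_ctx to accommodate the image embedding
)

output = llama.create_chat_completion(
    messages=[
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'describe this image'},
                {'type': 'image_url', 'image_url': {'url': 'https://xxxx/img.png'}},
            ],
        }
    ],
    stop=['<end_of_turn>', '<eos>'],
)
print(output['choices'][0]['message']['content'])
```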
Test Results:
- unsloth/gemma-3-4b-it-GGUF
- unsloth/gemma-3-12b-it-GGUF
- unsloth/gemma-3-27b-it-GGUF
- bartowski/google_gemma-3-12b-it-GGUF
Compatibility: